To solve the competitive facility location problem of new energy vehicle battery recycling outlets considering queuing theory, an Improved Human Learning Optimization (IHLO) algorithm was proposed. First, a competitive facility location model of new energy vehicle battery recycling outlets was constructed, which included queuing time constraints, capacity constraints, threshold constraints and other constraints. Then, considering that this problem is NP-hard, and in view of the shortcomings of the Human Learning Optimization (HLO) algorithm in the early stage, such as slow convergence, low optimization accuracy and poor solving stability, the IHLO algorithm was proposed by adopting an elite population opposition-based learning strategy, a group mutual learning operator and an adaptive harmonic parameter strategy. Finally, numerical experiments were carried out with Shanghai and the Yangtze River Delta as examples, and IHLO was compared with the Improved Binary Grey Wolf Optimization (IBGWO) algorithm, the Improved Binary Particle Swarm Optimization (IBPSO) algorithm, HLO and the Human Learning Optimization algorithm based on Learning Psychology (LPHLO). For the large, medium and small scales, the experimental results show that the IHLO algorithm performs best on 14 of the 15 indicators; compared with the IBGWO algorithm, the solution accuracy of IHLO is improved by at least 0.13%, the solution stability by at least 10.05%, and the solution speed by at least 17.48%. These results show that the proposed algorithm has high computational accuracy and fast optimization speed, and can effectively solve the competitive facility location problem.
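As a rough illustration, an elite opposition-based learning step on a binary location vector (one bit per candidate outlet) can be sketched as follows; the encoding, the elite ratio and all names are illustrative assumptions, not the paper's exact formulation:

```python
import random

def elite_opposition(population, fitness, elite_ratio=0.2):
    """Generate opposite solutions for the elite individuals of a binary
    population and keep whichever of (elite, opposite) is fitter.
    `fitness` is a user-supplied objective to maximize; all names and the
    elite_ratio default are illustrative, not the paper's settings."""
    ranked = sorted(population, key=fitness, reverse=True)
    n_elite = max(1, int(len(ranked) * elite_ratio))
    new_pop = ranked[:]
    for i in range(n_elite):
        opposite = [1 - bit for bit in ranked[i]]   # binary opposition: flip every bit
        if fitness(opposite) > fitness(ranked[i]):
            new_pop[i] = opposite
    return new_pop

# toy usage: maximize the number of opened outlets (ones)
random.seed(0)
pop = [[random.randint(0, 1) for _ in range(8)] for _ in range(6)]
best = max(elite_opposition(pop, sum), key=sum)
```

Opposition-based steps of this kind are typically applied early in the search, which is consistent with the abstract's goal of improving early-stage convergence.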
Aiming at the gradual scaling-up of quantum hardware and the insufficient speed of classical simulation, two optimization methods for a quantum simulator implemented on the Sunway supercomputer were proposed. Firstly, the tensor contraction operator library SWTT was reconstructed by improving the tensor transposition strategy and the computation strategy, which improved the kernel efficiency of partial tensor contractions and reduced redundant memory accesses. Secondly, a balance between the complexity and the efficiency of path computation was achieved by a contraction path adjustment method based on data locality optimization. Test results show that the operator library improvement raises the simulation efficiency of the "Sycamore" quantum supremacy circuit by 5.4% and the single-step tensor contraction efficiency by up to 49.7 times; the path adjustment method improves the floating-point efficiency by about 4 times while the path computational complexity is inflated by a factor of 2. With the two optimization methods, the single-precision and mixed-precision floating-point efficiencies for simulating a million-amplitude sampling of Google's 53-qubit, 20-layer random quantum circuit are improved from 3.98% and 1.69% to 18.48% and 7.42% respectively, and the theoretically estimated simulation time is reduced from 470 s to 226 s for single precision and from 304 s to 134 s for mixed precision, verifying that the two methods significantly improve the speed of quantum computational simulation.
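For intuition on why the transposition strategy matters, a single pairwise tensor contraction can be reduced to one large matrix multiplication after permuting axes (the classic transpose-transpose-GEMM scheme); this numpy sketch only illustrates that idea and is unrelated to the actual SWTT implementation:

```python
import numpy as np

def contract_pair(a, b, axes_a, axes_b):
    """Contract tensors a and b over the given axes by permuting the
    contracted axes of `a` to the end and those of `b` to the front,
    then issuing a single GEMM. This mirrors the transpose-transpose-GEMM
    (TTGT) scheme; purely illustrative, not the SWTT operator library."""
    free_a = [i for i in range(a.ndim) if i not in axes_a]
    free_b = [i for i in range(b.ndim) if i not in axes_b]
    at = np.transpose(a, free_a + list(axes_a))   # contracted axes last
    bt = np.transpose(b, list(axes_b) + free_b)   # contracted axes first
    k = int(np.prod([a.shape[i] for i in axes_a], dtype=int))
    c = at.reshape(at.size // k, k) @ bt.reshape(k, bt.size // k)  # one large GEMM
    return c.reshape([a.shape[i] for i in free_a] + [b.shape[i] for i in free_b])

# agrees with numpy's general-purpose tensordot
a = np.random.rand(2, 3, 4)
b = np.random.rand(4, 3, 5)
ref = np.tensordot(a, b, axes=([1, 2], [1, 0]))
assert np.allclose(contract_pair(a, b, (1, 2), (1, 0)), ref)
```

The cost of the explicit transposes is exactly the redundant memory traffic that a better transposition strategy tries to reduce.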
Aiming at the problems of low optimization accuracy and slow convergence of the Simple Human Learning Optimization (SHLO) algorithm, a new Human Learning Optimization algorithm based on Learning Psychology (LPHLO) was proposed. Firstly, based on the Team-Based Learning (TBL) theory in learning psychology, a TBL operator was introduced, so that team experience was added on the basis of individual experience and social experience to control the individual learning state and avoid premature convergence of the algorithm. Then, a dynamic parameter adjustment strategy was proposed by combining memory coding theory, thereby effectively integrating individual information, social information and team information, and better balancing the algorithm's abilities of local exploitation and global exploration. Two typical combinatorial optimization problems, the 0-1 knapsack problem and the multi-constraint knapsack problem, were selected for simulation experiments. Experimental results show that, compared with algorithms such as SHLO, Genetic Algorithm (GA) and the Binary Particle Swarm Optimization (BPSO) algorithm, the proposed LPHLO has more advantages in optimization accuracy and convergence speed, and a better ability to solve practical problems.
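The random/individual/social learning choice at the core of SHLO-style algorithms can be sketched on a toy 0-1 knapsack instance as follows; the probabilities, the knowledge-base handling and all function names are illustrative assumptions, not the operators of LPHLO itself:

```python
import random

def shlo_step(x, ikd, skd, pr=0.1, pi=0.4):
    """One SHLO-style update of a binary solution x: for each bit,
    with probability pr learn randomly, with probability pi copy from
    the individual knowledge base ikd, otherwise copy from the social
    knowledge base skd. Defaults are illustrative, not the paper's."""
    new = []
    for j in range(len(x)):
        r = random.random()
        if r < pr:
            new.append(random.randint(0, 1))   # random exploration
        elif r < pr + pi:
            new.append(ikd[j])                 # individual experience
        else:
            new.append(skd[j])                 # social experience
    return new

def knapsack_value(x, values, weights, cap):
    """Fitness of a 0-1 knapsack solution; infeasible solutions score 0."""
    w = sum(wi for xi, wi in zip(x, weights) if xi)
    return sum(vi for xi, vi in zip(x, values) if xi) if w <= cap else 0

# toy usage: repeatedly learn from the social best and keep improvements
random.seed(0)
values, weights, cap = [6, 10, 12], [1, 2, 3], 5
best, skd = [0, 0, 0], [0, 1, 1]
for _ in range(20):
    cand = shlo_step(best, best, skd)
    if knapsack_value(cand, values, weights, cap) > knapsack_value(best, values, weights, cap):
        best = cand
```

LPHLO's TBL operator would add a third, team-level knowledge source to this per-bit choice, and its dynamic parameter strategy would adapt `pr` and `pi` during the run.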
Designing a unified solver for combinatorial optimization problems without hand-crafted heuristic algorithms has become a research hotspot in the field of machine learning. At present, mature techniques mainly target static combinatorial optimization problems, while combinatorial optimization problems with dynamic changes remain insufficiently solved. To solve the above problems, a lightweight model called Dy4TSP (Dynamic model for Traveling Salesman Problems) was proposed, which combined the multi-head attention mechanism with distributed reinforcement learning to solve the traveling salesman problem on a dynamic graph. Firstly, the node representation vectors from a graph convolutional neural network were processed by a prediction network based on the multi-head attention mechanism. Then, a distributed reinforcement learning algorithm was used to quickly predict the probability that each node in the graph would be output as part of the optimal solution, and the optimal solution spaces of the problems under different probabilities were explored comprehensively. Finally, the trained model generated in real time an action decision sequence that satisfied the specific reward function. The model was evaluated on three typical combinatorial optimization problems, and the experimental results show that the solution qualities of the proposed model are 0.15 to 0.37 units higher than those of the open-source solver LKH3 (Lin-Kernighan-Helsgaun 3), and significantly better than those of the latest algorithms such as Graph Attention Network with Edge Embedding (EGATE). On other dynamic traveling salesman problems, the proposed model reaches an optimal path gap of 0.1 to 1.05, with slightly better results.
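The multi-head attention mechanism used by such prediction networks can be sketched generically in numpy as follows; the shapes, weight handling and names are illustrative, and this is not Dy4TSP's actual network:

```python
import numpy as np

def multi_head_attention(x, wq, wk, wv, wo, n_heads):
    """Scaled dot-product multi-head attention over node vectors.
    x: (n_nodes, d_model); wq, wk, wv, wo: (d_model, d_model).
    A generic sketch of the mechanism, not Dy4TSP's exact network."""
    n, d = x.shape
    dh = d // n_heads
    q = (x @ wq).reshape(n, n_heads, dh).transpose(1, 0, 2)  # (heads, n, dh)
    k = (x @ wk).reshape(n, n_heads, dh).transpose(1, 0, 2)
    v = (x @ wv).reshape(n, n_heads, dh).transpose(1, 0, 2)
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(dh)          # (heads, n, n)
    scores -= scores.max(axis=-1, keepdims=True)             # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)                 # row-wise softmax
    out = (attn @ v).transpose(1, 0, 2).reshape(n, d)        # concatenate heads
    return out @ wo

# toy usage: 5 node representation vectors of dimension 8, 2 heads
rng = np.random.default_rng(0)
x = rng.standard_normal((5, 8))
ws = [rng.standard_normal((8, 8)) * 0.1 for _ in range(4)]
y = multi_head_attention(x, *ws, n_heads=2)
```

In the dynamic-TSP setting, each output row would feed the per-node probability prediction described above.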
Aiming at the problem that the clustering results of the K-Means clustering algorithm are affected by the sample distribution because the cluster centers are updated with the mean, a Neural Tangent Kernel K-Means (NTKKM) clustering algorithm was proposed. Firstly, the data in the input space were mapped to a high-dimensional feature space through the Neural Tangent Kernel (NTK); then K-Means clustering was performed in the high-dimensional feature space, with the cluster centers updated by considering the between-cluster and within-cluster distances at the same time; finally, the clustering results were obtained. On the car and breast-tissue datasets, three evaluation indexes, namely accuracy, Adjusted Rand Index (ARI) and FM index, of the NTKKM clustering algorithm and the comparison algorithms were counted. Experimental results show that the clustering effect and stability of the NTKKM clustering algorithm are better than those of the K-Means clustering algorithm and the Gaussian kernel K-Means clustering algorithm. Compared with the traditional K-Means clustering algorithm, the NTKKM clustering algorithm has the accuracy increased by 14.9% and 9.4%, the ARI increased by 9.7% and 18.0%, and the FM index increased by 12.0% and 12.0% on the two datasets respectively, indicating the excellent clustering performance of the NTKKM clustering algorithm.
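Kernel K-Means on a precomputed kernel matrix can be sketched as follows; in the paper this matrix would come from the NTK, while the sketch below uses an RBF kernel for illustration and does not reproduce NTKKM's specific center-update rule:

```python
import numpy as np

def kernel_kmeans(K, k, n_iter=50, seed=0):
    """Kernel K-Means on a precomputed kernel matrix K (n x n).
    Distances to cluster centers are computed implicitly in feature
    space: ||phi(x_i) - mu_c||^2 = K_ii - 2*mean_j K_ij + mean_jl K_jl
    over members j, l of cluster c. Illustrative sketch only."""
    n = K.shape[0]
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, k, n)
    for _ in range(n_iter):
        dist = np.zeros((n, k))
        for c in range(k):
            idx = np.flatnonzero(labels == c)
            if idx.size == 0:
                dist[:, c] = np.inf          # skip empty clusters
                continue
            dist[:, c] = (np.diag(K)
                          - 2 * K[:, idx].mean(axis=1)
                          + K[np.ix_(idx, idx)].mean())
        new = dist.argmin(axis=1)
        if np.array_equal(new, labels):
            break                            # converged
        labels = new
    return labels

# toy usage: two well-separated blobs, RBF kernel
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.1, (10, 2)), rng.normal(3, 0.1, (10, 2))])
sq = ((X[:, None] - X[None]) ** 2).sum(-1)
labels = kernel_kmeans(np.exp(-sq), 2)
```

Replacing `np.exp(-sq)` with an NTK Gram matrix yields the NTK-based variant of this procedure.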
The traditional Graph Convolutional Network (GCN) and many of its variants achieve their best performance with shallow layers and do not make full use of the higher-order neighbor information of nodes in a graph. Subsequent deep graph convolution models can solve this problem but inevitably suffer from over-smoothing, which makes it impossible for the models to effectively distinguish different types of nodes in the graph. To address this problem, an adaptive deep graph convolution model using initial residual and decoupling operations, named ID-AGCN (model using Initial residual and Decoupled Adaptive Graph Convolutional Network), was proposed. Firstly, the node representation transformation and feature propagation were decoupled. Then, the initial residual was added to the node feature propagation process. Finally, the node representations obtained from different propagation layers were combined adaptively, so that appropriate local and global information was selected for each node to obtain node representations containing rich information, and a small number of labeled nodes were used for supervised training to generate the final node representations. Experimental results on the three datasets Cora, CiteSeer and PubMed indicate that the classification accuracy of ID-AGCN is improved by about 3.4, 2.3 and 1.9 percentage points respectively compared with GCN. The proposed model has superiority in alleviating over-smoothing.
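Decoupled propagation with an initial residual can be sketched as follows, in the spirit of the rule H^(l+1) = (1-a)*A_hat*H^(l) + a*H^(0); the adaptive combination of the returned layers is left to the caller, and all details are illustrative assumptions rather than the exact ID-AGCN formulation:

```python
import numpy as np

def propagate_with_initial_residual(adj, h0, n_layers=8, alpha=0.1):
    """Decoupled feature propagation with an initial residual:
        H^(l+1) = (1 - alpha) * A_hat @ H^(l) + alpha * H^(0),
    where A_hat is the symmetrically normalized adjacency with
    self-loops. Returns all per-layer representations so a model can
    combine them adaptively with learned weights (not shown here).
    Illustrative sketch, not the exact ID-AGCN model."""
    a = adj + np.eye(adj.shape[0])             # add self-loops
    d = a.sum(1)
    a_hat = a / np.sqrt(np.outer(d, d))        # D^-1/2 (A + I) D^-1/2
    layers, h = [h0], h0
    for _ in range(n_layers):
        h = (1 - alpha) * (a_hat @ h) + alpha * h0   # initial residual term
        layers.append(h)
    return layers

# toy usage: a 3-node path graph with one-hot features
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
reps = propagate_with_initial_residual(adj, np.eye(3))
# even after many layers, the initial residual keeps distinct nodes distinguishable
assert not np.allclose(reps[-1][0], reps[-1][2])
```

Without the `alpha * h0` term, repeated multiplication by `a_hat` would drive all rows toward the same dominant eigenvector, which is exactly the over-smoothing the abstract describes.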
Multi-kernel learning is an important type of kernel learning method, but most multi-kernel learning methods have the following problems: most of their basis kernel functions are traditional kernel functions with shallow structures, whose representation ability is weak when dealing with large-scale and unevenly distributed data; and the generalization error convergence rates of the existing multi-kernel learning methods are mostly O(1/√n), so their convergence is slow. Therefore, a multi-kernel learning method based on the Neural Tangent Kernel (NTK) was proposed. Firstly, the NTK, which has a deep structure, was used as the basis kernel function of the multi-kernel learning method, so as to enhance its representation ability. Then, a generalization error bound with a convergence rate of O(1/n) was proved based on the measure of the principal eigenvalue ratio. On this basis, a new multi-kernel learning algorithm was designed in combination with the kernel alignment measure. Finally, experiments were carried out on several datasets. Experimental results show that, compared with classification algorithms such as AdaBoost and K-Nearest Neighbor (KNN), the newly proposed multi-kernel learning algorithm has higher accuracy and better representation ability, which also verifies the feasibility and effectiveness of the proposed method.
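The centered kernel alignment measure that such algorithms use to weight basis kernels can be sketched as follows; the alignment formula is standard, while the weighting heuristic shown is one common choice and not necessarily the paper's exact algorithm:

```python
import numpy as np

def centered_alignment(k1, k2):
    """Centered kernel alignment between two kernel matrices:
        A(K1, K2) = <K1c, K2c>_F / (||K1c||_F * ||K2c||_F),
    where Kc = H K H and H = I - (1/n) * ones. A standard similarity
    measure between kernels; generic sketch, not the paper's code."""
    n = k1.shape[0]
    h = np.eye(n) - np.ones((n, n)) / n       # centering matrix
    k1c, k2c = h @ k1 @ h, h @ k2 @ h
    return (k1c * k2c).sum() / (np.linalg.norm(k1c) * np.linalg.norm(k2c))

def alignment_weights(kernels, y):
    """Weight each basis kernel by its alignment with the ideal target
    kernel y y^T, then normalize -- one common multi-kernel heuristic."""
    target = np.outer(y, y)
    w = np.array([max(centered_alignment(k, target), 0.0) for k in kernels])
    return w / w.sum()

# toy usage: labels in {-1, +1}; a label-consistent kernel gets more weight
y = np.array([1.0, 1.0, -1.0, -1.0])
good = np.outer(y, y)                         # perfectly aligned kernel
noisy = np.eye(4)                             # uninformative kernel
w = alignment_weights([good, noisy], y)
assert w[0] > w[1]
```

In the proposed method, the basis kernels being weighted would be NTK matrices rather than the toy matrices above.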